
    A Note on Selling Optimally Two Uniformly Distributed Goods

    We provide a new, much simplified and straightforward proof of a result of Pavlov [2011] regarding the revenue-maximizing mechanism for selling two goods with uniformly i.i.d. valuations over intervals [c, c+1] to an additive buyer. This is done by explicitly defining optimal dual solutions to a relaxed version of the problem, where the convexity requirement for the bidder's utility has been dropped. Their optimality comes directly from their structure, through the use of exact complementarity. For c = 0 and c ≥ 0.092, it turns out that the corresponding optimal primal solution is a feasible selling mechanism, so the initial relaxation comes without loss and revenue maximality follows. However, for 0 < c < 0.092 this is not the case, providing the first clear example where relaxing convexity provably does not come for free, even in a two-item, regularly i.i.d. setting.
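
    For the c = 0 case, the optimal mechanism is known in closed form: a deterministic menu offering each item at price 2/3 and the full bundle at price (4 − √2)/3 ≈ 0.862. As a minimal illustration (our own sketch, not code from the paper), the following Monte Carlo estimate checks the revenue of this menu against an additive buyer with i.i.d. U[0,1] values; the function name and defaults are ours.

    ```python
    import random

    def menu_revenue(p=2/3, q=(4 - 2**0.5) / 3, trials=10**6, seed=0):
        """Estimate the revenue of the menu {each item at p, bundle at q}
        against an additive buyer with i.i.d. U[0,1] valuations (c = 0)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            x1, x2 = rng.random(), rng.random()
            # the buyer picks the utility-maximizing option; buying both items
            # separately costs 2p > q, so the bundle option dominates that choice
            _, payment = max((0.0, 0.0), (x1 - p, p), (x2 - p, p), (x1 + x2 - q, q))
            total += payment
        return total / trials
    ```

    With the defaults this comes out to roughly 0.549, and perturbing p or q in either direction should only lower the estimate, consistent with the menu's optimality.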

    Bounding the Optimal Revenue of Selling Multiple Goods

    Using duality-theory techniques, we derive simple, closed-form formulas bounding the optimal revenue of a monopolist selling many heterogeneous goods, in the case where the buyer's valuations for the items come i.i.d. from a uniform distribution and in the case where they follow independent (but not necessarily identical) exponential distributions. We apply these to obtain, in both settings, specific performance guarantees, as functions of the number of items m, for the simple deterministic selling mechanisms studied by Hart and Nisan [EC 2012], namely the one that sells the items separately and the one that offers them all in a single bundle. We also propose and study the performance of a natural randomized mechanism for exponential valuations, called Proportional. As an interesting corollary, for the special case where the exponential distributions are also identical, we can derive that offering the goods in a single full bundle is the optimal selling mechanism for any number of items. To our knowledge, this is the first result of its kind: finding a revenue-maximizing auction in an additive setting with arbitrarily many goods.
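
    As a quick empirical counterpart to these bounds (our own sketch, not the paper's closed-form formulas), the snippet below compares the two Hart–Nisan benchmarks for m i.i.d. U[0,1] items: selling separately at the Myerson price of 1/2 per item, versus grid-searching the grand-bundle price over sampled sums. All function names are ours.

    ```python
    import random

    def srev_uniform(m):
        """Selling separately: each U[0,1] item is optimally priced at 1/2,
        contributing revenue 1/4, so SRev = m/4."""
        return m / 4

    def brev_uniform(m, trials=200_000, seed=0):
        """Grand bundle: empirically maximize p * P[sum of values >= p] over
        the sampled sums of m i.i.d. U[0,1] valuations."""
        rng = random.Random(seed)
        sums = sorted(sum(rng.random() for _ in range(m)) for _ in range(trials))
        # a price equal to sums[i] sells to the trials - i samples with sum >= it
        return max(s * (trials - i) / trials for i, s in enumerate(sums))
    ```

    For m = 2, the search recovers the closed-form bundle price √(2/3) ≈ 0.816 with revenue ≈ 0.544, already beating SRev = 0.5; quantifying this gap as m grows is exactly what the paper's duality bounds are for.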

    The Anarchy of Scheduling Without Money

    We consider the scheduling problem on n strategic unrelated machines when no payments are allowed, under the objective of minimizing the makespan. We adopt the model introduced in [Koutsoupias 2014], where a machine is bound by her declarations in the sense that if she is assigned a particular job then she will have to execute it for an amount of time at least equal to the one she reported, even if her private, true processing capabilities are actually faster. We provide a (non-truthful) randomized algorithm whose pure Price of Anarchy is arbitrarily close to 1 for the case of a single task, and close to n if it is applied independently to schedule many tasks, which is asymptotically optimal for the natural class of anonymous, task-independent algorithms. Previous work considers the constraint of truthfulness and proves a tight approximation ratio of (n+1)/2 for one task, which generalizes to n(n+1)/2 for many tasks. Furthermore, we revisit the truthfulness case and reduce the latter approximation ratio for many tasks down to n, asymptotically matching the best known lower bound. This is done via a detour to the relaxed, fractional version of the problem, for which we are also able to provide an optimal approximation ratio of 1. Finally, we mention that all our algorithms achieve an optimal ratio of 1 for the social-welfare objective.
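
    To make the imposition model concrete, here is a toy brute-force equilibrium checker for a single task (our own illustration; it is not the paper's randomized algorithm): machines declare times on a grid, the machine receiving the task must run it for at least her declaration, and we search for pure Nash equilibria of the resulting game. Plugging in the naive lowest-declaration rule already yields a makespan ratio of 2 = n on a two-machine instance, the kind of anarchy the paper's randomized algorithm drives down towards 1.

    ```python
    from itertools import product

    def pure_poa(alloc, true_times, grid):
        """Worst pure-equilibrium makespan over the optimal makespan, for one
        task in the imposition model: the chosen machine runs the task for
        max(declared, true) time. Returns 0 if no pure equilibrium exists."""
        n = len(true_times)
        def cost(decl, i):
            return max(decl[i], true_times[i]) if alloc(decl) == i else 0.0
        worst = 0.0
        for decl in product(grid, repeat=n):
            # pure Nash: nobody lowers her cost by a unilateral grid deviation
            if all(cost(decl, i) <= min(cost(decl[:i] + (d,) + decl[i + 1:], i)
                                        for d in grid) for i in range(n)):
                winner = alloc(decl)
                worst = max(worst, max(decl[winner], true_times[winner]))
        return worst / min(true_times)

    # naive rule: assign to the lowest declaration, breaking ties by index
    lowest = lambda decl: min(range(len(decl)), key=lambda i: (decl[i], i))
    print(pure_poa(lowest, true_times=(2.0, 1.0), grid=(1.0, 2.0, 4.0, 8.0)))  # 2.0
    ```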

    Robust Revenue Maximization Under Minimal Statistical Information

    We study the problem of multi-dimensional revenue maximization when selling m items to a buyer that has additive valuations for them, drawn from a (possibly correlated) prior distribution. Unlike traditional Bayesian auction design, we assume that the seller has very restricted knowledge of this prior: they only know the mean μ_j and an upper bound σ_j on the standard deviation of each item's marginal distribution. Our goal is to design mechanisms that achieve good revenue against an ideal optimal auction that has full knowledge of the distribution in advance. Informally, our main contribution is a tight quantification of the interplay between the dispersity of the priors and the aforementioned robust approximation ratio. Furthermore, this can be achieved by very simple selling mechanisms. More precisely, we show that selling the items via separate price lotteries achieves an O(log r) approximation ratio, where r = max_j(σ_j/μ_j) is the maximum coefficient of variation across the items. If forced to restrict ourselves to deterministic mechanisms, this guarantee degrades to O(r^2). Assuming independence of the item valuations, these ratios can be further improved by pricing the full bundle. For the case of identical means and variances, in particular, we get a guarantee of O(log(r/m)), which converges to optimality as the number of items grows large. We demonstrate the optimality of the above mechanisms by providing matching lower bounds. Our tight analysis for the deterministic case resolves an open gap in the work of Azar and Micali [ITCS'13]. As a by-product, we also show how one can directly use our upper bounds to improve and extend previous results related to the parametric auctions of Azar et al. [SODA'13].
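
    To see how a guarantee can depend only on (μ_j, σ_j), consider a Cantelli-style posted price for a single item: pricing at μ − kσ sells with probability at least k²/(1 + k²) against any distribution with that mean and standard-deviation bound. The sketch below (our own, in the spirit of parametric auctions; not the paper's exact mechanism) grid-searches k to maximize the guaranteed revenue.

    ```python
    def robust_posted_price(mu, sigma, steps=1000):
        """Deterministic posted price from the mean and a std-dev upper bound.
        Cantelli's inequality gives P[X >= mu - k*sigma] >= k^2 / (1 + k^2),
        so price mu - k*sigma guarantees revenue (mu - k*sigma) * k^2 / (1 + k^2)."""
        best_price, best_guarantee = mu, 0.0
        for i in range(1, steps):
            k = (mu / sigma) * i / steps    # only k < mu/sigma keeps the price positive
            guarantee = (mu - k * sigma) * k * k / (1 + k * k)
            if guarantee > best_guarantee:
                best_price, best_guarantee = mu - k * sigma, guarantee
        return best_price, best_guarantee
    ```

    As the coefficient of variation r = σ/μ grows, the best such fixed-price guarantee decays quickly, which is the qualitative phenomenon behind the deterministic O(r^2) bound; the paper's lotteries are what improve this to O(log r).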

    The VCG mechanism for Bayesian scheduling

    We study the problem of scheduling m tasks to n selfish, unrelated machines in order to minimize the makespan, where the execution times are independent random variables, identical across machines. We show that the VCG mechanism, which myopically allocates each task to its best machine, achieves an approximation ratio of O(ln n / ln ln n). This improves significantly on the previously best known bound of O(m/n) for prior-independent mechanisms, given by Chawla et al. [STOC'13] under the additional assumption of Monotone Hazard Rate (MHR) distributions. Although we demonstrate that this is in general tight, if we do maintain the MHR assumption, then we get improved, (small) constant bounds for m ≥ n ln n i.i.d. tasks, while we also identify a sufficient condition on the distribution that yields a constant approximation ratio regardless of the number of tasks.
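
    The myopic behavior of VCG here is easy to simulate: every task simply goes to the machine with the smallest realized execution time. The sketch below (our own; the optimum is brute-forced, so keep n and m tiny) estimates the ratio of expected makespans under i.i.d. Exp(1) times, a standard MHR distribution.

    ```python
    import random
    from itertools import product

    def vcg_vs_opt(n=3, m=4, trials=2000, seed=0):
        """Estimate E[makespan(VCG)] / E[makespan(OPT)] for m tasks on n machines
        with i.i.d. Exp(1) execution times. VCG sends each task to its fastest
        machine; OPT is brute-forced over all n**m assignments."""
        rng = random.Random(seed)
        vcg_sum = opt_sum = 0.0
        for _ in range(trials):
            t = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(m)]
            # VCG: myopic, task-by-task minimum
            loads = [0.0] * n
            for task in t:
                best = min(range(n), key=task.__getitem__)
                loads[best] += task[best]
            vcg_sum += max(loads)
            # OPT: exhaustive search over every assignment of tasks to machines
            opt_sum += min(
                max(sum(t[j][i] for j in range(m) if a[j] == i) for i in range(n))
                for a in product(range(n), repeat=m))
        return vcg_sum / opt_sum
    ```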